
    New Bounds for Randomized List Update in the Paid Exchange Model

    We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator [M.S. Manasse et al., 1988] and Reingold, Westbrook and Sleator [N. Reingold et al., 1994]. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps. We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i-1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again, the lower bounds hold for large d.
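    For illustration, a minimal sketch of the cost accounting in the paid exchange model P^d under the full cost model (an access to the i-th item costs i, each swap of two adjacent items costs d). The move-to-front rearrangement used below is only a placeholder policy and is not the randomized algorithm of the paper.

        # Cost of serving a request sequence in the paid exchange model P^d
        # (full cost model): accessing the item at position i (1-based) costs i,
        # every swap of two adjacent items costs d, and free exchanges do not exist.
        # The move-to-front rearrangement is purely illustrative.
        def serve(requests, initial_list, d, move_to_front=True):
            lst = list(initial_list)
            total = 0
            for x in requests:
                i = lst.index(x)          # 0-based position of the requested item
                total += i + 1            # access cost in the full cost model
                if move_to_front and i > 0:
                    total += i * d        # i paid exchanges of adjacent items
                    lst.insert(0, lst.pop(i))
            return total

        print(serve("cbcab", "abc", d=3))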

    Improved Online Algorithms for Knapsack and GAP in the Random Order Model

    The knapsack problem is one of the classical problems in combinatorial optimization: Given a set of items, each specified by its size and profit, the goal is to find a maximum profit packing into a knapsack of bounded capacity. In the online setting, items are revealed one by one and the decision whether the current item is packed or discarded forever must be made immediately and irrevocably upon arrival. We study the online variant in the random order model where the input sequence is a uniform random permutation of the item set. We develop a randomized (1/6.65)-competitive algorithm for this problem, outperforming the current best algorithm of competitive ratio 1/8.06 [Kesselheim et al. SIAM J. Comp. 47(5)]. Our algorithm is based on two new insights: We introduce a novel algorithmic approach that employs two given algorithms, optimized for restricted item classes, sequentially on the input sequence. In addition, we study and exploit the relationship of the knapsack problem to the 2-secretary problem. The generalized assignment problem (GAP) includes, besides the knapsack problem, several important problems related to scheduling and matching. We show that in the same online setting, applying the proposed sequential approach yields a (1/6.99)-competitive randomized algorithm for GAP. Again, our proposed algorithm outperforms the current best result of competitive ratio 1/8.06 [Kesselheim et al. SIAM J. Comp. 47(5)].
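    As an illustration of the sequential approach mentioned above, a schematic in which two given online algorithms, each optimized for a restricted item class, are run one after the other on the randomly permuted input. The switching fraction and the two sub-algorithms are assumed placeholders, not the concrete choices made in the paper.

        import random

        # Schematic of the sequential composition in the random order model:
        # algo_first handles a prefix of the random permutation, algo_second the
        # remainder, sharing one knapsack capacity. Both sub-algorithms and the
        # switching fraction are placeholders.
        def sequential_knapsack(items, algo_first, algo_second, switch, capacity):
            order = list(items)
            random.shuffle(order)                 # uniform random permutation
            cut = int(switch * len(order))
            packed = algo_first(order[:cut], capacity)
            used = sum(size for size, _profit in packed)
            packed += algo_second(order[cut:], capacity - used)
            return packed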

    New Results for the k-Secretary Problem

    Suppose that n numbers arrive online in random order and the goal is to select k of them such that the expected sum of the selected items is maximized. The decision for any item is irrevocable and must be made on arrival without knowing future items. This problem is known as the k-secretary problem, which includes the classical secretary problem as the special case k=1. It is well-known that the latter problem can be solved by a simple algorithm of competitive ratio 1/e, which is asymptotically optimal. When k is small, only for k=2 does there exist an algorithm beating the threshold of 1/e [Chan et al. SODA 2015]. The algorithm relies on an involved selection policy. Moreover, there exist results when k is large [Kleinberg SODA 2005]. In this paper we present results for the k-secretary problem, considering the interesting and relevant case that k is small. We focus on simple selection algorithms, accompanied by combinatorial analyses. As a main contribution we propose a natural deterministic algorithm designed to have competitive ratios strictly greater than 1/e for small k >= 2. This algorithm is hardly more complex than the elegant strategy for the classical secretary problem, optimal for k=1, and works for all k >= 1. We explicitly compute its competitive ratios for 2 <= k <= 100, ranging from 0.41 for k=2 to 0.75 for k=100. Moreover, we show that an algorithm proposed by Babaioff et al. [APPROX 2007] has a competitive ratio of 0.4168 for k=2, implying that the previous analysis was not tight. Our analysis reveals a surprising combinatorial property of this algorithm, which might be helpful for a tight analysis for general k.
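    For reference, the simple 1/e strategy for the classical case k=1 mentioned above: observe roughly the first n/e items without accepting, then accept the first item that beats everything seen so far. A minimal sketch; the deterministic algorithm proposed in the paper for k >= 2 is not reproduced here.

        import math, random

        # Classical secretary algorithm (k = 1): sample the first ~n/e items
        # without accepting, then accept the first item exceeding the best
        # sampled value. Its competitive ratio is 1/e, asymptotically optimal.
        def classical_secretary(values):
            n = len(values)
            sample = max(1, int(n / math.e))
            best_seen = max(values[:sample])
            for v in values[sample:]:
                if v > best_seen:
                    return v              # accept irrevocably and stop
            return values[-1]             # otherwise the last item is taken

        vals = list(range(1, 1001))
        random.shuffle(vals)              # random arrival order
        print(classical_secretary(vals))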

    Scheduling with unexpected machine breakdowns

    We investigate an online version of a basic scheduling problem where a set of jobs has to be scheduled on a number of identical machines so as to minimize the makespan. The job processing times are known in advance and preemption of jobs is allowed. Machines are non-continuously available, i.e., they can break down and recover at arbitrary time instances not known in advance. New machines may be added as well. Thus machine availabilities change online. We first show that no online algorithm can construct optimal schedules. We also show that no online algorithm can achieve a bounded competitive ratio if there may be time intervals where no machine is available. Then we present an online algorithm that constructs schedules with an optimal makespan of C_max^OPT if a lookahead of one is given, i.e., the algorithm always knows the next point in time when the set of available machines changes. Finally, we give an online algorithm without lookahead that constructs schedules with a nearly optimal makespan of C_max^OPT + ε, for any ε > 0, if at any time at least one machine is available. Our results demonstrate that not knowing machine availabilities in advance is of little harm.

    On the Value of Penalties in Time-Inconsistent Planning

    People tend to behave inconsistently over time due to an inherent present bias. As this may impair performance, social and economic settings need to be adapted accordingly. Common tools to reduce the impact of time-inconsistent behavior are penalties and prohibition. Such tools are called commitment devices. In recent work, Kleinberg and Oren [EC, 2014] connect the design of a prohibition-based commitment device to a combinatorial problem in which edges are removed from a task graph G with n nodes. However, this problem is NP-hard to approximate within a ratio less than n^(1/2)/3 [Albers and Kraft, WINE, 2016]. To address this issue, we propose a penalty-based commitment device that does not delete edges but raises their cost. The benefits of our approach are twofold. On the conceptual side, we show that penalties are up to 1/beta times more efficient than prohibition, where 0 < beta <= 1 parameterizes the present bias. On the computational side, we improve approximability by presenting a 2-approximation algorithm for allocating penalties. To complement this result, we prove that optimal penalties are NP-hard to approximate within a ratio of 1.08192.
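    To make the role of the parameter beta concrete, a minimal sketch of how a present-biased agent traverses a task graph in the graph-theoretic model of Kleinberg and Oren referenced above: at every node the agent perceives the next edge at full cost, discounts all later cost by beta, takes the seemingly cheapest continuation and re-evaluates at the next node. The graph encoding and helper names are assumptions made for illustration; the penalty-allocation algorithm of the paper is not shown.

        import heapq

        # Present-biased traversal of a task graph. graph maps each node to
        # {successor: edge cost}; beta in (0, 1] is the present bias. Assumes
        # the target is reachable from every visited node (task graphs are DAGs).
        def shortest_cost(graph, src, target):
            dist, heap = {src: 0.0}, [(0.0, src)]
            while heap:
                d, v = heapq.heappop(heap)
                if v == target:
                    return d
                if d > dist.get(v, float("inf")):
                    continue
                for u, c in graph.get(v, {}).items():
                    if d + c < dist.get(u, float("inf")):
                        dist[u] = d + c
                        heapq.heappush(heap, (d + c, u))
            return float("inf")

        def biased_walk(graph, start, target, beta):
            path, v, total = [start], start, 0.0
            while v != target:
                # perceived cost: full cost of the next edge plus beta times
                # the cheapest cost of finishing from its endpoint
                u, c = min(graph[v].items(),
                           key=lambda e: e[1] + beta * shortest_cost(graph, e[0], target))
                total += c
                path.append(u)
                v = u
            return path, total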

    On the Value of Job Migration in Online Makespan Minimization

    Makespan minimization on identical parallel machines is a classical scheduling problem. We consider the online scenario where a sequence of n jobs has to be scheduled non-preemptively on m machines so as to minimize the maximum completion time of any job. The best competitive ratio that can be achieved by deterministic online algorithms is in the range [1.88, 1.9201]. Currently no randomized online algorithm with a smaller competitiveness is known, for general m. In this paper we explore the power of job migration, i.e. an online scheduler is allowed to perform a limited number of job reassignments. Migration is a common technique used in theory and practice to balance load in parallel processing environments. As our main result we settle the performance that can be achieved by deterministic online algorithms. We develop an algorithm that is alpha_m-competitive, for any m >= 2, where alpha_m is the solution of a certain equation. For m = 2, alpha_2 = 4/3, and lim_{m -> infinity} alpha_m = W_{-1}(-1/e^2)/(1 + W_{-1}(-1/e^2)) is approximately 1.4659. Here W_{-1} is the lower branch of the Lambert W function. For m >= 11, the algorithm uses at most 7m migration operations. For smaller m, 8m to 10m operations may be performed. We complement this result by a matching lower bound: no online algorithm that uses o(n) job migrations can achieve a competitive ratio smaller than alpha_m. We finally trade performance for migrations. We give a family of algorithms that is c-competitive, for any 5/3 <= c <= 2. For c = 5/3, the strategy uses at most 4m job migrations. For c = 1.75, at most 2.5m migrations are used.
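    The stated limit can be checked numerically with SciPy's Lambert W implementation (lower branch, k = -1); a small sketch.

        import math
        from scipy.special import lambertw

        # Limiting competitive ratio for m -> infinity:
        # W_{-1}(-1/e^2) / (1 + W_{-1}(-1/e^2)), with W_{-1} the lower branch.
        w = lambertw(-1 / math.e**2, k=-1).real
        print(w / (1 + w))                # approximately 1.4659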

    Optimal Algorithms for Online b-Matching with Variable Vertex Capacities

    We study the b-matching problem, which generalizes classical online matching introduced by Karp, Vazirani and Vazirani (STOC 1990). Consider a bipartite graph G = (S ∪ R, E). Every vertex s ∈ S is a server with a capacity b_s, indicating the number of possible matching partners. The vertices r ∈ R are requests that arrive online and must be matched immediately to an eligible server. The goal is to maximize the cardinality of the constructed matching. In contrast to earlier work, we study the general setting where servers may have arbitrary, individual capacities. We prove that the most natural and simple online algorithms achieve optimal competitive ratios. As for deterministic algorithms, we give a greedy algorithm RelativeBalance and analyze it by extending the primal-dual framework of Devanur, Jain and Kleinberg (SODA 2013). In the area of randomized algorithms we study the celebrated Ranking algorithm by Karp, Vazirani and Vazirani. We prove that the original Ranking strategy, simply picking a random permutation of the servers, achieves an optimal competitiveness of 1-1/e, independently of the server capacities. Hence it is not necessary to resort to a reduction, replacing every server s by b_s vertices of unit capacity and then running Ranking on the resulting graph with ∑_{s ∈ S} b_s vertices on the left-hand side. From a theoretical point of view our result explores the power of randomization and strictly limits the amount of required randomness. From a practical point of view it leads to more efficient allocation algorithms. Technically, we show that the primal-dual framework of Devanur, Jain and Kleinberg cannot establish a competitiveness better than 1/2 for the original Ranking algorithm, choosing a permutation of the servers. Therefore, we formulate a new configuration LP for the b-matching problem and then conduct a primal-dual analysis. We extend this analysis approach to the vertex-weighted b-matching problem. Specifically, we show that the algorithm PerturbedGreedy by Aggarwal, Goel, Karande and Mehta (SODA 2011), again with a sole randomization over the set of servers, is (1-1/e)-competitive. Together with recent work by Huang and Zhang (STOC 2020), our results demonstrate that configuration LPs can be strictly stronger than standard LPs in the analysis of more complex matching problems.
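    A minimal sketch of the Ranking strategy as described above, with one random permutation drawn over the servers and individual capacities b_s; the adjacency encoding below is an assumption for illustration, and the primal-dual analysis is of course not reflected in the code.

        import random

        # Ranking for online b-matching: fix one uniformly random permutation
        # (ranking) of the servers; each arriving request is matched to its
        # eligible server of highest rank that still has residual capacity.
        def ranking_b_matching(servers, capacities, requests, neighbors):
            # servers: list of server ids; capacities: dict server -> b_s
            # requests: arrival order of request ids
            # neighbors: dict request -> iterable of eligible servers
            rank = {s: r for r, s in enumerate(random.sample(servers, len(servers)))}
            residual = dict(capacities)
            matching = []
            for req in requests:
                eligible = [s for s in neighbors.get(req, ()) if residual[s] > 0]
                if eligible:
                    s = min(eligible, key=rank.get)   # highest-ranked server
                    residual[s] -= 1
                    matching.append((req, s))
            return matching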

    Scheduling in the Random-Order Model

    Makespan minimization on identical machines is a fundamental problem in online scheduling. The goal is to assign a sequence of jobs to m identical parallel machines so as to minimize the maximum completion time of any job. Already in the 1960s, Graham showed that Greedy is (2-1/m)-competitive [Graham, 1966]. The best deterministic online algorithm currently known achieves a competitive ratio of 1.9201 [Fleischer and Wahl, 2000]. No deterministic online strategy can obtain a competitiveness smaller than 1.88 [Rudin III, 2001]. In this paper, we study online makespan minimization in the popular random-order model, where the jobs of a given input arrive as a random permutation. It is known that Greedy does not attain a competitive factor asymptotically smaller than 2 in this setting [Osborn and Torng, 2008]. We present the first improved performance guarantees. Specifically, we develop a deterministic online algorithm that achieves a competitive ratio of 1.8478. The result relies on a new analysis approach. We identify a set of properties that a random permutation of the input jobs satisfies with high probability. Then we conduct a worst-case analysis of our algorithm, for the respective class of permutations. The analysis implies that the stated competitiveness holds not only in expectation but with high probability. Moreover, it provides mathematical evidence that job sequences leading to higher performance ratios are extremely rare, pathological inputs. We complement the results by lower bounds for the random-order model. We show that no deterministic online algorithm can achieve a competitive ratio smaller than 4/3. Moreover, no deterministic online algorithm can attain a competitiveness smaller than 3/2 with high probability.
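    Graham's Greedy strategy mentioned above, for context: every arriving job is placed on a currently least loaded machine, which yields the classical (2-1/m) guarantee. A minimal sketch; the 1.8478-competitive algorithm developed in the paper is more involved and not reproduced here.

        import heapq

        # Graham's Greedy (list scheduling): assign each arriving job to a
        # machine with the currently smallest load; (2-1/m)-competitive for
        # makespan minimization on m identical machines.
        def greedy_makespan(jobs, m):
            loads = [0.0] * m
            heapq.heapify(loads)
            for p in jobs:                         # p = processing time
                least = heapq.heappop(loads)
                heapq.heappush(loads, least + p)
            return max(loads)

        print(greedy_makespan([3, 1, 4, 1, 5, 9, 2, 6], m=3))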